Why “Free Speech!!!” is Spotify’s Favorite Red Herring

This is a transcript of Technically Spiritual Podcast Episode 28, which you can listen to here.

Hi everyone! Welcome to Technically Spiritual. I’m your host, Prerna Manchanda. Thanks for being here. 

We’re in early February of 2022 at the time of recording, so there’s lots of stuff in the news about Joe Rogan versus Neil Young, Joni Mitchell, and other artists choosing to remove their music from Spotify if Spotify doesn’t remove Rogan’s podcast. And I’m going to discuss the specifics of this shortly. But this isn’t a new phenomenon. In fact, I encourage you to check out my previous episode, episode 13, “When Hate is Monetized”.

In that episode I covered what free speech is, how social media’s business models work, and how we can think about potentially regulating it. I want to take some time to dive a little deeper into these topics, as the conversation has certainly evolved since then, but I encourage you to check out that episode as well for a full scope on this topic.

Free Speech, Censorship, & Social Media

We’re hearing a lot right now about free speech and censorship - “Spotify can’t remove Rogan because it’s against free speech.” This isn’t actually true but we’ll get to that. “If the tech companies remove voices that oppose the mainstream narrative what does that mean for our society?” Censorship is a slippery slope. But free speech and censorship aren’t actually the root cause of the problem of misinformation and hate speech spreading on the internet - they are symptoms. And just like with most of the things that feel problematic in the age of Big Tech, if we only address the symptoms, we are never really making the changes that we need to actually create a healthier society. So, what do I mean by this?

I’ve said it before and I’ll say it again… the business models of Big Tech are the root cause of this problem. The incessant and sketchy ways that Big Tech collects data, infringes upon our privacy, and sells our information are the root cause of this problem. The fact that Big Tech has ZERO transparency about how their algorithms make decisions is a root cause of this problem. Focusing on free speech and censorship is addressing a branch, or even a leaf. Business models, data collection, and lack of transparency are the roots.

Us, Our Data, Is the Real Product

The companies I’m talking about range from social media companies, to Amazon, to Google, to Netflix, to even Spotify, but it’s all generally the same with a few variations - the service is generally free or perhaps has a small fee to use, but regardless, the user is the product - we are the ones being bought and sold.

Our attention is what they are seeking. It works a little differently with social media companies and Google, which are completely free to use because they get paid based on advertising, but all of these companies generally optimize for engagement. These companies collect a ton of data on us: where we live, who we spend time with, what gender we identify with, what we spend money on, what locations we frequent, if we have a pet, when we are getting our periods, and so, so much more. They then typically sell this data to advertisers so that they can show us highly personalized ads that we will be extremely likely to click on.

The other part of this is the time spent on the platform. The companies only make money if we are using their platforms, so they want us to be using them all the time. This is called engagement. Their success depends on how high engagement is - i.e. how long a given user is on the platform. What this means for the average user is that these platforms are highly addictive, because the companies are fighting for our attention.
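To make “engagement” concrete, here’s a minimal sketch - my own illustration, not any company’s real telemetry, and every field name and number is made up - of the kind of metric a platform might track: total time on the platform, per user, per day.

```python
from collections import defaultdict

# Hypothetical session log: (user_id, minutes spent in one session).
# Every name and number here is invented for illustration.
sessions = [
    ("user_a", 12.5), ("user_a", 34.0),
    ("user_b", 3.0),  ("user_b", 47.5),
    ("user_c", 95.0),
]

def daily_time_on_platform(session_log):
    """Sum each user's minutes for the day - a crude engagement metric."""
    totals = defaultdict(float)
    for user_id, minutes in session_log:
        totals[user_id] += minutes
    return dict(totals)

per_user = daily_time_on_platform(sessions)
print(per_user)
# Average minutes per user is exactly the kind of number teams are pushed to grow.
print("average minutes per user:", sum(per_user.values()) / len(per_user))
```

The point isn’t the arithmetic; it’s that when a number like this is the success metric, every design decision gets bent toward making it bigger.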

It might be in our best interest to get off our phones and go for a walk, but it’s not in Facebook’s best interest, so they’re going to keep showing you highly engaging content to make sure you don’t look away for too long. This might mean using dark patterns (I have an episode on this as well), like sending you an email saying you’ve been tagged in a photo but not showing you the photo, or telling you there’s only one left of the product you’ve been looking at so you’d better order it quickly… but where it gets destructive is that engagement often means showing you radicalized and extreme content.

If the company’s objective is to keep you on the platform for as long as possible, it knows that you will eventually get bored of seeing photos of your smiling friends, so they call on machine learning - algorithms that adapt as more information is fed to them - to learn the patterns that keep users engaged.

And keep in mind as I said before, we don’t totally know how these algorithms work because the companies are not transparent about them and have no legal obligation to tell the public anything - even though they directly impact and shape our society - but more on this later. Unfortunately, the more extreme the content, the more users stay engaged.
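None of these companies publish their ranking code - that’s the transparency problem I keep coming back to - but here is a deliberately over-simplified sketch, entirely my own, of what “optimizing for engagement” means in practice: candidate posts get scored by predicted engagement and the feed is sorted by that score. Notice that nothing in the objective asks whether the content is true.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float  # output of some learned model (not shown here)
    predicted_share_prob: float     # likewise a model prediction
    is_accurate: bool               # known to fact-checkers, invisible to the ranker

def engagement_score(post: Post) -> float:
    # The objective is attention: expected seconds watched plus expected re-shares.
    # Truthfulness simply does not appear in the formula.
    return post.predicted_watch_seconds + 100.0 * post.predicted_share_prob

def rank_feed(candidates):
    """Return candidate posts sorted by predicted engagement, highest first."""
    return sorted(candidates, key=engagement_score, reverse=True)

# Illustrative candidates: the sensational, inaccurate post wins the top slot.
feed = rank_feed([
    Post("friend_vacation_photo", predicted_watch_seconds=8,  predicted_share_prob=0.01, is_accurate=True),
    Post("calm_news_explainer",   predicted_watch_seconds=40, predicted_share_prob=0.05, is_accurate=True),
    Post("outrage_conspiracy",    predicted_watch_seconds=90, predicted_share_prob=0.30, is_accurate=False),
])
print([p.post_id for p in feed])
# ['outrage_conspiracy', 'calm_news_explainer', 'friend_vacation_photo']
```

In this toy example the sensational, inaccurate post wins the top slot simply because the model predicts people will watch and share it the most.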

When our engagement is their product, misinformation fuels profit.

Tristan Harris says, “We're never going to solve problems if they're just maximising for profit, and profit is directly tied to the mass manipulation of human behaviour and beliefs. So we have to also change the fundamental governance structure. If the business model is attention, then it's not about truth, it's just about what gets attention. And making up or exaggerating or distorting facts to serve political purposes is always going to be better for getting attention than delivering the truth.”

(https://timesofindia.indiatimes.com/india/a-15-year-old-influencer-can-now-reach-as-many-people-as-a-newspaper-but-with-none-of-the-responsibility/articleshow/78596272.cms)

Flat earth videos do better on YouTube than ‘round earth’ videos, for example. And it’s been argued that part of the reason Trump was elected in 2016 was YouTube and other social media companies pushing a pro-Trump agenda and an anti-Hillary one, especially the further down the rabbit hole you got - so the more videos you watched that were recommended by YouTube’s algorithm, the more likely you were to be fed pro-Trump content. There are many examples of this, and I urge you to go back to Episode 13 for more details on that, but let’s move forward.

So let’s talk about how free speech ties into all of this, since that is what’s all over the headlines. Again, free speech isn’t the root cause. Tech companies actually benefit from everyone being obsessed with a simplistic cry of “free speech,” because such a contentious topic helps them deflect and avoid responsibility. So let’s actually see what’s going on behind the headlines.

Free speech is a right protected by the First Amendment to the United States Constitution, which states: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

The ACLU says: “Freedom of speech, the press, association, assembly, and petition: This set of guarantees, protected by the First Amendment, comprises what we refer to as freedom of expression. It is the foundation of a vibrant democracy, and without it, other fundamental rights, like the right to vote, would wither away.” (https://www.aclu.org/issues/free-speech)

Speakers are protected against all government agencies and officials: federal, state, and local, and legislative, executive, or judicial. The First Amendment does not protect speakers, however, against private individuals or organizations, such as private employers, private colleges, or private landowners. The First Amendment restrains only the government. And this is an extremely important distinction. 

(https://constitutioncenter.org/interactive-constitution/interpretation/amendment-i/interps/266)

A few examples of what free speech protects. The right…

  • Not to speak (specifically, the right not to salute the flag).
    West Virginia Board of Education v. Barnette, 319 U.S. 624 (1943).

  • Of students to wear black armbands to school to protest a war (“Students do not shed their constitutional rights at the schoolhouse gate.”).
    Tinker v. Des Moines, 393 U.S. 503 (1969).

  • To use certain offensive words and phrases to convey political messages.
    Cohen v. California, 403 U.S. 15 (1971).

  • To contribute money (under certain circumstances) to political campaigns.
    Buckley v. Valeo, 424 U.S. 1 (1976).

  • To advertise commercial products and professional services (with some restrictions).
    Virginia Board of Pharmacy v. Virginia Consumer Council, 425 U.S. 748 (1976); Bates v. State Bar of Arizona, 433 U.S. 350 (1977).

  • To engage in symbolic speech, (e.g., burning the flag in protest).
    Texas v. Johnson, 491 U.S. 397 (1989); United States v. Eichman, 496 U.S. 310 (1990).

Freedom of speech does not include the right:

  • To incite imminent lawless action.
    Brandenburg v. Ohio, 395 U.S. 444 (1969).

  • To make or distribute obscene materials.
    Roth v. United States, 354 U.S. 476 (1957).

  • To burn draft cards as an anti-war protest.
    United States v. O’Brien, 391 U.S. 367 (1968).

  • To permit students to print articles in a school newspaper over the objections of the school administration.
    Hazelwood School District v. Kuhlmeier, 484 U.S. 260 (1988).

  • Of students to make an obscene speech at a school-sponsored event.
    Bethel School District #43 v. Fraser, 478 U.S. 675 (1986).

  • Of students to advocate illegal drug use at a school-sponsored event.
    Morse v. Frederick, __ U.S. __ (2007).

(https://www.uscourts.gov/about-federal-courts/educational-resources/about-educational-outreach/activity-resources/what-does)

Let’s go back to an analog world for a moment and just consider what free speech meant prior to the digital age. Essentially, free speech protects the public square. Government entities cannot intervene if someone wants to stand up on a public park bench and speak out against the government, hand out posters to join a cult, or burn the flag. However, the rules change outside of public places. Private spaces have their own rules and different protections. You aren’t allowed to go onto an elementary school playground and do those things. You aren’t allowed to go into a privately owned park and do those things. Even though they might be “outside,” they’re not quite considered the “public square.”

So let’s think about the digital age now. The “public square” has changed dramatically. Especially in recent years with the Covid pandemic, but even prior. Some folks consider the public square to be online, as that’s where many public debates are held and where we “gather” as a collective. 

In a way, our virtual world, and particularly social media, has become the public square, because this is where we protest, where we gather, where we find others and have conversations. There are some critics of this view. For example:

“Zeynep Tufekci has resisted the idea of calling the social media public squares because really their whole model is based on feeding you just information that’s really made just for you, so it’s the opposite of a public square in a way.” (https://www.vox.com/2018/11/19/18103081/first-amendment-facebook-jameel-jaffer-freedom-speech-alex-jones-decode-podcast-kara-swisher)

But what gets tricky is that if we consider, for example, Twitter to be the public square, we get into muddy territory, because Twitter is a privately owned company.

The reconfiguration of the public square into this online space run by for-profit, private companies has truly changed the game. The companies are deciding, according to Zeynep Tufekci, “how and where we can interact; with whom; and at what scale and visibility.” (https://www.wired.com/story/twitter-has-officially-replaced-the-town-square/)

First Amendment rights lawyer Jameel Jaffer says, “conversations that used to take place in spaces that were subject to the First Amendment are now taking place in spaces that are controlled by private actors and therefore not subject to the First Amendment.” 

So when people are screaming that we can’t kick Joe Rogan off Spotify because of “the First Amendment,” that is actually not true. Spotify has a First Amendment right to regulate its platform however it wants to. But honestly, that’s scarier. Because remember, their profit is driven by our engagement, and engagement is driven by extremes, shallow headlines, misinformation, and propaganda. And when a user is scrolling, they are not about to stop, fact-check, and research every post. Meanwhile, the platform gathers more data and feeds us more of what we want.

Again, free speech limits the government, but does not limit private actors. While it may seem that Twitter, Facebook, Instagram, etc. are all “public spaces” because of how many users they have and how far-reaching they are, they are still private companies that are free to make up their own rules.

And herein lies the problem: tech companies want to be free from any liability, so they use the laws to protect themselves while simultaneously evading any responsibility. They say, ‘You can’t hold us to the same standards that you hold a newspaper to, because we are hosting third-party content and we are not responsible for what our users post online.’ And this is currently protected by Section 230 (again, refer back to episode 13 for more context on that). And then simultaneously the tech companies say, ‘Oh, you can’t regulate us, because we are media organizations and we make editorial decisions just like a newspaper - you can’t tell newspapers how to edit their publication, so you can’t tell us what to do either.’ Essentially, they want to have their cake and eat it too.

The companies have been completely unregulated for their entire existence, and now that some serious harms to society have been made public, everyone knows that this needs to change. The government feels pressure. The companies feel defensive. The public feels confused. Meanwhile, the harm continues.

When we get stuck on content creators’ free speech versus Big Tech censorship, we limit our thinking about the real problems and the real solutions that need contemplation and implementation. Spotify loves that people are up in arms about free speech or about a single artist, because it’s an extremely effective red herring that distracts from the real issue: that Spotify is responsible for the harm.

The bigger issue that I see with speech and Big Tech is that we’re treating Big Tech like it’s neutral. A park is a neutral place. For the most part, everyone has equal access to the park, and yes, maybe some folks can afford a megaphone and others can’t, but generally speaking, everyone’s voice gets heard if they want to speak.

But technology is not neutral!! ESPECIALLY social media! These platforms are actually changing the nature of speech itself. Jameel Jaffer says, “…these companies have immense power to decide not just who can speak, but also who gets heard. Who can speak, because they decide who gets on the platform and who doesn’t; but who can be heard, because their algorithms decide what speech gets prioritized and what speech gets suppressed.”

And this is not a new problem! We are just noticing it because two widely popular American white guys are currently in the spotlight. But voices have been amplified and suppressed, both intentionally and unintentionally, by the platforms for YEARS - and not just in the US, but globally as well.

Disinformation researcher Camille François says “There are many other groups that have been deplatformed in the past. Some have been completely driven out of the platforms by abuse, a lot of discussion around sex workers, for instance… And as we think through who gets to be online, who gets to express their voice, we have to think about the full set of decisions that have been made by these platforms around the globe and across the years.”

(https://www.humanetech.com/podcast/31-disinformation-then-and-now)

Again, it’s really tricky and quite unclear exactly how to move forward or how to start thinking about regulating Big Tech. Jameel Jaffer says, “I’m not sure we really want a situation where Facebook is subject to the First Amendment in the same way that the government is. … it would require Facebook to allow pornography on the platform, for example. It would require Facebook to allow Constitutionally protected hate speech. Facebook would be required to host that. I’m not sure anybody would see that as a solution to the problems that we’re facing right now.”

And as I’ve mentioned before and will continue to say - marginalized people, ESPECIALLY in tech, will be further marginalized if we start over-censoring and over-monitoring. We do need to live in a world where we hear unpopular opinions outside of a mainstream narrative. We have to think about what we lose when we start taking things down and deplatforming voices. But we also can’t allow hateful and harmful content or misinformation to spread, especially by the platforms’ own algorithms. So who gets to decide what gets seen and heard and what doesn’t? Who gets to decide what is considered to be the truth?

We’re already moving in the direction of fear around what is said and what is not said online. I myself, as a creator, have felt this. I sometimes feel afraid to speak up and share my thoughts, because if I say something that could be interpreted wrongly, I could be canceled. While I think it’s generally great that people are holding companies accountable - not just Big Tech but businesses in general - to be transparent and decent and ethical, the online world is super volatile right now, and if we don’t create some boundaries around this, it’s going to lead to more polarization and more violence in our world. I think we as a collective are finally starting to realize that what happens online has real-life implications in our physical world.

At the moment, the Big Tech companies are essentially regulating themselves. They’re updating their rules and policies all the time - nothing is standard. They have “content moderators” (another huge topic for another time) who supposedly set the standards, but with billions of users and so many pieces of content being uploaded every second, how is content moderation going to help? Again, it’s addressing a symptom, not the root cause. And who is regulating the content moderators? Don’t they bring all their biases and worldviews to the process? So is it fair for Facebook’s own employed content moderators to be the ones making decisions about what the world gets to see and not see?

We can’t just apply past laws to our current situation. Big Tech is a big industry, and just as there were laws and regulations created around television, journalism, and so on, it’s time for some new regulations. And the first one has to be transparency, because if we don’t know how these algorithms work, or what kind of data they’re collecting, packaging up, and selling, we will never get to the bottom of this or make a clear game plan to move forward.

We also need to decouple free speech from free reach (as Renée DiResta says). Just because someone has the right to say whatever they want to their own network, it doesn’t mean they have the right to have that content shown to billions of people. Again, the algorithms don’t totally know whether content is hateful or true; they just know that a lot of people like it, so they amplify it - or they know that their clients (the ad agencies) have paid a lot for it to be the first hit on Google, so they put it there regardless of how legitimate it is.
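One way to picture “free speech, not free reach” - and this is purely my own sketch, not any platform’s actual policy code - is to treat the decision to host a post and the decision to amplify it as two separate questions:

```python
from dataclasses import dataclass

@dataclass
class Content:
    author_followers: int
    violates_law: bool     # e.g. incitement - the narrow category that must come down
    flagged_misinfo: bool  # flagged by fact-checkers or moderators

def may_stay_up(item: Content) -> bool:
    """Hosting decision - the 'free speech' half. Only clearly unlawful content is removed."""
    return not item.violates_law

def amplification_eligible(item: Content) -> bool:
    """Reach decision - the 'free reach' half. Flagged content can stay up
    but is excluded from recommendations, trending lists, and autoplay."""
    return may_stay_up(item) and not item.flagged_misinfo

def estimated_audience(item: Content) -> int:
    # Without amplification a post reaches roughly the author's own network;
    # with it, the algorithm can multiply that reach (the factor here is invented).
    if not may_stay_up(item):
        return 0
    return item.author_followers * 1000 if amplification_eligible(item) else item.author_followers

post = Content(author_followers=500, violates_law=False, flagged_misinfo=True)
print(may_stay_up(post))             # True  - the post is not taken down
print(amplification_eligible(post))  # False - but it is not algorithmically boosted
print(estimated_audience(post))      # 500   - it reaches the author's own network only
```

Under a split like this, nobody’s post is taken down for being merely dubious, but dubious content also doesn’t get the thousand-fold megaphone of recommendations, trending lists, and autoplay. The multiplier in the sketch is invented; the asymmetry is the point.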

In the case of Spotify, they bought Joe Rogan’s podcast to be hosted exclusively on Spotify for 100 MILLION dollars. Spotify does not have any incentive to deplatform Joe Rogan, regardless of what type of content he is sharing. He has millions of listeners and his exclusivity has brought paid users over to Spotify, which is good for both him and Spotify. Joe Rogan has more listeners than Neil Young. According to Newsweek:

JRE attracts an estimated 11 million viewers per episode with Rogan posting four to five episodes a week that can last up to three hours long. In contrast, Young attracts nearly 6,027,000 listeners a month, according to Spotify, and Mitchell receives a little over 3,738,000 listeners.

Regardless of the quality of the content, Spotify is going to keep Joe Rogan because he makes them more money.

And I don’t think that the solution is to remove Joe Rogan from Spotify - again, that’s not really the issue here. Honestly, he would just start his own network, it would be unregulated, his millions of listeners would go there, and all the same problems would remain. Instead, what we need to do is start thinking critically about the implications of amplification, and demand transparency from tech companies so we understand how these decisions are getting made, so that we can move forward and maintain a democratic world. A world where truth is more important than money. A world where you are free to speak your mind even when your thoughts might oppose the mainstream opinion. A world where tech companies acknowledge their power in shaping public discourse and are required to wield it ethically. There isn’t one simple solution, but there are a few things that can be done to start moving in the right direction, and it starts with transparency.

Camille François: “I think that having a meaningful field of people who are looking at it from different angles here is really important because one of the ways we tend to only focus on the handful of things that are again, like very Silicon Valley centric, very U.S. centric. And we know that these threats are global and are continuing to target civil society around the world. So I think we need to do more work… We definitely need more attention to make sure that the standards we set for ourselves are applying globally.”

(https://www.humanetech.com/podcast/31-disinformation-then-and-now)

Conclusion:

There are hard conversations to be had, and extreme groups and corporations are ready to throw red herrings left and right. But the core issue here is a lack of transparency and regulation, regardless of personal opinions about free speech and government. When each of us has so much buying power, and data to leverage, we can make a difference.

This is just the beginning... and in the meantime we can practice mindfulness about what we consume and where, and consider how far our power might go if we stopped fighting each other about symptoms and started challenging the root causes of our biggest societal issues.

I try not to be depressing on this podcast, even though we are discussing very real issues like our psychology and how Big Tech is manipulating us. And I know that there are a lot of issues in this world that need addressing. It’s important to think critically about these issues and, if we have the bandwidth, to put pressure on our legislators to implement laws and regulations that would force Big Tech to change.

It’s also important not to get caught up in the drama of troll comments and to do proper research and have informed conversations. I’m grateful that you took some time to listen to my thoughts on this matter. Ethical tech research is part of my life’s work and I do my best to illustrate a holistic perspective. It’s okay to be overwhelmed, I feel this way as well. We do our best when we can and try to take care of ourselves along the way.

The bottom line for most of these companies is money. And you are in control of how you spend your money. And more importantly you are in control of where you place your attention. And that is empowering!
